118 research outputs found
Optimal portfolio selection as an application of the Karush-Kuhn-Tucker conditions
Bachelor's thesis, Treballs Finals de Grau de Matemàtiques, Facultat de Matemàtiques, Universitat de Barcelona. Year: 2019. Advisor: José Manuel Corcuera Valverde. [en] The Karush-Kuhn-Tucker conditions (in short, the KKT conditions), an extension of the well-known method of Lagrange multipliers, were developed to solve optimization problems in a more general setting, that is, with both equality and inequality constraints. In turn, the selection of an optimal portfolio meeting the requirements of each investor, whether they seek maximum return, minimum risk, or a balance between the two, can be solved by applying the KKT conditions.
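As a minimal sketch of the idea (not the thesis's own formulation): the classic minimum-variance portfolio with a return floor and no short-selling is exactly the kind of inequality-constrained problem whose optimum is characterized by the KKT conditions, and SciPy's SLSQP solver finds a KKT point numerically. All data below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (not from the thesis): expected returns and
# covariance matrix for three assets, and a minimum acceptable return.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.12, 0.03],
                  [0.04, 0.03, 0.09]])
target = 0.10

# Minimize portfolio variance w' Sigma w subject to
#   sum(w) = 1        (budget, equality constraint)
#   mu' w >= target   (return floor, inequality constraint)
#   w >= 0            (no short-selling, bound constraints).
# SLSQP returns a point satisfying the KKT conditions of this problem.
res = minimize(
    lambda w: w @ Sigma @ w,
    x0=np.full(3, 1 / 3),
    method="SLSQP",
    bounds=[(0.0, None)] * 3,
    constraints=[
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "ineq", "fun": lambda w: mu @ w - target},
    ],
)
print(res.x)  # optimal portfolio weights
```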
FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing
Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attracted much attention in research. Most existing learning-based dehazing methods are not fully end-to-end: they still follow the traditional dehazing procedure, first estimating the medium transmission and the atmospheric light, then recovering the haze-free image based on the atmospheric scattering model. However, in practice, due to the lack of priors and constraints, it is hard to estimate these intermediate parameters precisely, and inaccurate estimation further degrades dehazing performance, resulting in artifacts, color distortion, and insufficient haze removal. To address this, we propose FD-GAN, a fully end-to-end Generative Adversarial Network with a Fusion-discriminator for image dehazing. With the proposed Fusion-discriminator, which takes frequency information as an additional prior, our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts. Moreover, we synthesize a large-scale training dataset including various indoor and outdoor hazy images to boost performance, and we show that the performance of learning-based dehazing methods is strongly influenced by the training data. Experiments show that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images, producing more visually pleasing dehazed results.
Comment: Accepted by AAAI 2020 (with supplementary files)
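To illustrate only the "frequency information as a prior" idea (the paper's actual filters and discriminator architecture may differ), here is a hypothetical PyTorch-style sketch that splits an image into low- and high-frequency components a fusion discriminator could consume alongside the image itself:

```python
import torch
import torch.nn.functional as F

def frequency_priors(img: torch.Tensor, k: int = 5):
    """Split an image batch (B, C, H, W) into low- and high-frequency
    parts. A depthwise box blur stands in for the low-pass filter; the
    residual is treated as the high-frequency component."""
    c = img.shape[1]
    kernel = torch.ones(c, 1, k, k, dtype=img.dtype, device=img.device) / (k * k)
    low = F.conv2d(img, kernel, padding=k // 2, groups=c)
    high = img - low
    return low, high

# A fusion discriminator could then judge the dehazed image together
# with its frequency components, e.g.:
#   score = D(torch.cat([img, low, high], dim=1))
```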
CP3: Unifying Point Cloud Completion by Pretrain-Prompt-Predict Paradigm
Point cloud completion aims to predict a complete shape from its partial observation. Current approaches mainly consist of generation and refinement stages in a coarse-to-fine style. However, the generation stage often lacks the robustness to handle diverse incompleteness patterns, while the refinement stage blindly recovers point clouds without semantic awareness. To tackle these challenges, we unify point cloud Completion by a generic Pretrain-Prompt-Predict paradigm, namely CP3. Inspired by prompting approaches from NLP, we creatively reinterpret point cloud generation and refinement as the prompting and predicting stages, respectively. Then, we introduce a concise self-supervised pretraining stage before prompting, which effectively increases the robustness of point cloud generation through an Incompletion-Of-Incompletion (IOI) pretext task. Moreover, we develop a novel Semantic Conditional Refinement (SCR) network at the predicting stage, which discriminatively modulates multi-scale refinement under the guidance of semantics. Finally, extensive experiments demonstrate that CP3 outperforms state-of-the-art methods by a large margin.
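As a rough sketch of what an Incompletion-Of-Incompletion training pair could look like (the paper's actual cropping strategy is not given here, so the mechanics below are an assumption): take an already-partial cloud, crop away a further region around a random seed point, and use the original partial cloud as the reconstruction target.

```python
import numpy as np

def ioi_pair(partial: np.ndarray, drop_ratio: float = 0.25):
    """Build one hypothetical Incompletion-Of-Incompletion pair from an
    already-partial cloud of shape (N, 3): remove the points nearest a
    random seed point, and use the original partial cloud as the
    self-supervised reconstruction target."""
    seed = partial[np.random.randint(len(partial))]
    dists = np.linalg.norm(partial - seed, axis=1)
    keep = np.argsort(dists)[int(len(partial) * drop_ratio):]  # drop nearest points
    return partial[keep], partial  # (further-incomplete input, target)
```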
Towards Mitigating Spurious Correlations in the Wild: A Benchmark and a more Realistic Dataset
Deep neural networks often exploit non-predictive features that are spuriously correlated with class labels, leading to poor performance on groups of examples without such features. Despite the growing body of recent work on remedying spurious correlations, the lack of a standardized benchmark hinders reproducible evaluation and comparison of the proposed solutions. To address this, we present SpuCo, a Python package with modular implementations of state-of-the-art solutions, enabling easy and reproducible evaluation of current methods. Using SpuCo, we demonstrate the limitations of existing datasets and evaluation schemes in validating the learning of predictive features over spurious ones. To overcome these limitations, we propose two new vision datasets: (1) SpuCoMNIST, a synthetic dataset that enables simulating the effect of real-world data properties, e.g., the difficulty of learning the spurious feature, as well as noise in the labels and features; (2) SpuCoAnimals, a large-scale dataset curated from ImageNet that captures spurious correlations in the wild much more closely than existing datasets. These contributions highlight the shortcomings of current methods and provide a direction for future research in tackling spurious correlations. SpuCo, containing the benchmark and datasets, can be found at https://github.com/BigML-CS-UCLA/SpuCo, with detailed documentation available at https://spuco.readthedocs.io/en/latest/.
Comment: Package: https://github.com/BigML-CS-UCLA/SpuCo
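For readers unfamiliar with how such benchmarks are scored: the standard metric in this line of work is worst-group accuracy. A generic sketch follows; it is not SpuCo's actual API (see the documentation linked above for that).

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Worst-group accuracy: accuracy on the weakest (class, spurious
    attribute) group, the usual robustness metric on benchmarks like
    the SpuCo datasets."""
    preds, labels, groups = map(np.asarray, (preds, labels, groups))
    accs = [(preds[groups == g] == labels[groups == g]).mean()
            for g in np.unique(groups)]
    return min(accs)
```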
Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
Contrastive learning (CL) has emerged as a powerful technique for representation learning, with or without label supervision. However, supervised CL is prone to collapsing representations of subclasses within a class by not capturing all their features, and unsupervised CL may suppress harder class-relevant features by focusing on learning easy class-irrelevant features; both significantly compromise representation quality. Yet, there is no theoretical understanding of \textit{class collapse} or \textit{feature suppression} at \textit{test} time. We provide the first unified, theoretically rigorous framework for determining \textit{which} features are learnt by CL. Our analysis indicates that, perhaps surprisingly, the bias of (stochastic) gradient descent towards finding simpler solutions is a key factor in collapsing subclass representations and suppressing harder class-relevant features. Moreover, we present increasing embedding dimensionality and improving the quality of data augmentations as two theoretically motivated solutions to \textit{feature suppression}. We also provide the first theoretical explanation for why employing supervised and unsupervised CL together yields higher-quality
representations, even when using commonly used stochastic gradient methods.
Comment: to appear at ICML 2023
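For reference, a standard form of the supervised contrastive loss discussed here (generic notation, not taken from this paper); the simplicity-bias claim is that SGD prefers minima of this loss that rely on a few easy features:

```latex
% Supervised contrastive (SupCon) loss for an anchor i:
% P(i) = same-class positives, A(i) = all other samples in the batch,
% z = L2-normalized embeddings, \tau = temperature.
\mathcal{L}_i = \frac{-1}{|P(i)|} \sum_{p \in P(i)}
    \log \frac{\exp(z_i \cdot z_p / \tau)}
              {\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
```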
Timing the Transient Execution: A New Side-Channel Attack on Intel CPUs
Transient execution attacks are a class of attacks that exploit vulnerabilities in modern CPU optimization techniques, and new attacks surface rapidly. The side channel is a key component of transient execution attacks, used to leak data. In this work, we discover a vulnerability whereby a change to the EFLAGS register during transient execution may have a side effect on subsequent Jcc (jump on condition code) instructions on Intel CPUs. Based on this discovery, we propose a new side-channel attack that leverages the timing of both transient execution and Jcc instructions to leak data. The attack encodes secret data in a change to the register that makes the execution time of the following context slightly slower, which the attacker can measure to decode the data. This attack does not rely on the cache system and does not need to manually reset the EFLAGS register to its initial state before the attack, which may make it more difficult to detect or mitigate. We implemented this side channel on machines with Intel Core i7-6700, i7-7700, and i9-10980XE CPUs. On the first two processors, we combined it with the Meltdown attack as its side channel, achieving a 100% success rate in leaking data. We evaluate and discuss potential defenses against the attack. Our contributions include discovering security vulnerabilities in the implementation of Jcc instructions and the EFLAGS register, and proposing a new side-channel attack that does not rely on the cache system.
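The decoding step of such a timing channel can be sketched as a toy simulation (the timing numbers below are synthetic; a real attack would time the Jcc-terminated instruction sequence in cycles, e.g. with rdtscp):

```python
import numpy as np

# Toy simulation of the receiver side of a timing channel: the attacker
# repeatedly times an instruction sequence whose duration depends on the
# secret bit, then decodes one bit per measurement by thresholding.
rng = np.random.default_rng(0)
secret_bits = [1, 0, 1, 1, 0]
# Synthetic timings: a set bit makes execution ~10 cycles slower.
timings = [rng.normal(110.0 if b else 100.0, 2.0) for b in secret_bits]

threshold = 105.0  # calibrated beforehand from known-0/known-1 runs
decoded = [int(t > threshold) for t in timings]
print(decoded)  # [1, 0, 1, 1, 0]
```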
- …